Sensing and describing 3-D structure
Discovering the three-dimensional structure of an object is important for a variety of robot tasks. Single-sensor systems such as machine vision systems cannot reliably compute three-dimensional structure in unconstrained environments. Active, exploratory tactile sensing can be used to complement passive stereo vision data to derive robust surface and feature descriptions of objects. Control for the tactile sensing is provided by the vision system, which supplies regions of interest for the tactile system to explore. The resulting descriptions of surfaces and features are accurate and can be used in a later matching phase against a model database of objects to identify the object and its position and orientation in space.
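The vision-to-touch handoff described above can be sketched minimally: the vision system supplies a region of interest, and the tactile system samples probe points inside it. The function name, the 2-D ROI, and the grid strategy are all illustrative assumptions, not the paper's method.

```python
import math

def probe_points(roi_min, roi_max, spacing):
    """Generate a grid of candidate tactile probe points inside a
    vision-supplied region of interest (here an axis-aligned box in the
    x-y plane); the sensor would approach each point along -z.
    Hypothetical sketch, not the system's actual probe planner."""
    (x0, y0), (x1, y1) = roi_min, roi_max
    # At least two samples per axis so the ROI boundary is always touched.
    nx = max(2, int(math.floor((x1 - x0) / spacing)) + 1)
    ny = max(2, int(math.floor((y1 - y0) / spacing)) + 1)
    pts = []
    for i in range(nx):
        for j in range(ny):
            pts.append((x0 + i * (x1 - x0) / (nx - 1),
                        y0 + j * (y1 - y0) / (ny - 1)))
    return pts
```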
Mapping haptic exploratory procedures to multiple shape representations
Research in human haptics has revealed a number of exploratory procedures (EPs) that are used in determining attributes of an object, particularly shape. This research has been used as a paradigm for building an intelligent robotic system that can perform shape recognition from touch sensing. In particular, a number of mappings between EPs and shape modeling primitives have been found. The choice of shape primitive for each EP is discussed, and results from experiments with a Utah-MIT dextrous hand system are presented. A vision algorithm to complement active touch sensing for the task of autonomous shape recovery is also presented.
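The EP-to-primitive mapping could be represented as a simple lookup. The EP names and primitive names below are illustrative stand-ins, not the paper's exact taxonomy.

```python
# Hypothetical mapping from haptic exploratory procedures (EPs) to the
# shape-modeling primitive each one recovers; the labels are assumptions
# for illustration, not the published mapping.
EP_TO_PRIMITIVE = {
    "enclosure":         "superquadric",   # grasping by containment -> global shape
    "contour_following": "space_curve",    # edge tracing -> 3-D curve
    "lateral_motion":    "surface_patch",  # planar exploration -> plane/patch
}

def primitive_for(ep):
    """Return the shape primitive an EP maps to, or None if unknown."""
    return EP_TO_PRIMITIVE.get(ep)
```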
A robotic system for 3D model acquisition from multiple range images
This paper describes a robotic system that builds a 3D CAD model of an object incrementally from multiple range images. It motivates the generation of a solid model at each stage of the modeling process, allowing the use of well-defined geometric algorithms to perform the merging and integration task. The data from each imaging operation are represented by a mesh, which is then extruded in the viewing direction to form a solid model. These solids are merged as they are acquired into a composite model of the object. We describe an algorithm that builds a solid model from a mesh surface and present experimental results of reconstructing a complex object. In addition, we discuss an approach to completely automating the model acquisition process by integration with previous sensor-planning results.
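The extrude-then-merge idea can be illustrated with a toy voxel version: each view's extruded solid is a conservative superset of the object, so intersecting the per-view solids shrinks the composite toward the true shape. This 2-D occupancy-set sketch is an assumption for illustration; the paper works with true solid models and geometric algorithms, not voxels.

```python
def extrude_depth_map(depth, max_depth):
    """Turn a 1-D depth profile (distance to the sensed surface per column,
    viewed along +z) into an occupancy set: everything at or behind the
    surface is conservatively assumed solid."""
    occ = set()
    for x, d in enumerate(depth):
        for z in range(d, max_depth):
            occ.add((x, z))
    return occ

def merge(views):
    """Intersect per-view extruded solids: each new view can only carve
    away space it has observed to be empty."""
    model = views[0]
    for v in views[1:]:
        model &= v
    return model
```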
Constraint-based sensor planning for scene modeling
We describe an automated scene modeling system that consists of two components operating in an interleaved fashion: an incremental modeler that builds solid models from range imagery, and a sensor planner that analyzes the resulting model and computes the next sensor position. This planning component is target-driven and computes sensor positions using model information about the imaged surfaces and the unexplored space in a scene. The method is shape-independent and uses a continuous-space representation that preserves the accuracy of sensed data. It is able to completely acquire a scene by repeatedly planning sensor positions, utilizing a partial model to determine volumes of visibility for contiguous areas of unexplored scene. These visibility volumes are combined with sensor placement constraints to compute sets of occlusion-free sensor positions that are guaranteed to improve the quality of the model. We show results for the acquisition of a scene that includes multiple, distinct objects with high occlusion.
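The core filtering step, keeping only sensor positions with an unobstructed view of a target, can be sketched in 2-D: sample candidate positions and reject any whose line of sight crosses an occluding segment. The sampling strategy and the 2-D setting are simplifying assumptions; the paper computes continuous visibility volumes in 3-D.

```python
import math

def _segments_cross(p, q, a, b):
    """True if segment p-q properly intersects segment a-b (2-D)."""
    def ccw(u, v, w):
        return (w[1] - u[1]) * (v[0] - u[0]) > (v[1] - u[1]) * (w[0] - u[0])
    return ccw(p, a, b) != ccw(q, a, b) and ccw(p, q, a) != ccw(p, q, b)

def occlusion_free_viewpoints(target, occluders, radius, n=16):
    """Sample candidate sensor positions on a circle around the target and
    keep those whose line of sight to the target clears every occluder
    segment, a toy stand-in for visibility-volume filtering."""
    good = []
    for k in range(n):
        ang = 2 * math.pi * k / n
        cam = (target[0] + radius * math.cos(ang),
               target[1] + radius * math.sin(ang))
        if all(not _segments_cross(cam, target, a, b) for a, b in occluders):
            good.append(cam)
    return good
```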
Visual control of grasping and manipulation tasks
This paper discusses the problem of visual control of grasping. We have implemented an object tracking system that can be used to provide visual feedback for locating the positions of fingers and objects to be manipulated, as well as their relative relationships. This visual analysis can be used to control open-loop grasping systems in a number of manipulation tasks where finger contact, object movement, and task completion need to be monitored and controlled.
Robot learning of everyday object manipulations via human demonstration
We deal with the problem of teaching a robot to manipulate everyday objects through human demonstration. We first design a task descriptor which encapsulates important elements of a task. The design originates from observations that the manipulations involved in many everyday object tasks can be considered as a series of sequential rotations and translations, which we call manipulation primitives. We then propose a method that enables a robot to decompose a demonstrated task into sequential manipulation primitives and construct a task descriptor. We also show how to transfer a task descriptor learned from one object to similar objects. Finally, we argue that this framework is highly generic. In particular, it can be used to construct a robot task database that serves as a manipulation knowledge base for a robot to succeed in manipulating everyday objects.
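The decomposition into sequential rotations and translations can be sketched on a toy pose trajectory: label each step by which component changed, then merge consecutive identical labels into one primitive. The 2-D pose format and the labeling rule are assumptions for illustration, not the paper's segmentation method.

```python
def decompose(poses, eps=1e-9):
    """Compress a demonstrated pose trajectory into sequential manipulation
    primitives. Each pose is (position, angle); runs of the same primitive
    label are merged. A toy stand-in for the decomposition described above."""
    prims = []
    for (p0, a0), (p1, a1) in zip(poses, poses[1:]):
        if abs(a1 - a0) > eps:
            label = "rotate"
        elif p1 != p0:
            label = "translate"
        else:
            continue  # no motion this step
        if not prims or prims[-1] != label:
            prims.append(label)
    return prims
```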
Acquisition and Interpretation of 3-D Sensor Data from Touch
Acquisition of 3-D scene information has focused on either passive 2-D imaging methods (stereopsis, structure from motion, etc.) or 3-D range sensing methods (structured lighting, laser scanning, etc.). Little work has been done in using active touch sensing with a multi-fingered robotic hand to acquire scene descriptions, even though it is a well-developed human capability. Touch sensing differs from other, more passive sensing modalities such as vision in a number of ways. First, a multi-fingered robotic hand with touch sensors can probe, move, and change its environment. This imposes a level of control on the sensing that makes it typically more difficult to use than traditional passive sensors, for which active control is not an issue. Second, touch sensing generates far less data than vision methods; this is especially intriguing in light of psychological evidence that shows humans can recover shape and a number of other object attributes very reliably using touch alone. Future robotic systems will need to use dextrous robotic hands for tasks such as grasping, manipulation, assembly, inspection, and object recognition. This paper describes our use of touch sensing as part of a larger system we are building for 3-D shape recovery and object recognition using touch and vision methods. It focuses on three exploratory procedures we have built to acquire and interpret sparse 3-D touch data: grasping by containment, planar surface exploration, and surface contour exploration. Experimental results for each of these procedures are presented.
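Because touch yields only sparse 3-D samples, interpretation means fitting a model to a handful of points. For the planar-surface case, a least-squares plane fit illustrates the idea; this minimal pure-Python version (Cramer's rule on the 3x3 normal equations) is a generic sketch, not the paper's estimator.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to sparse 3-D touch samples.
    Returns (a, b, c). Solves the 3x3 normal equations by Cramer's rule;
    assumes the points are not degenerate (collinear in x-y)."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = len(points)
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; sz += z
        sxz += x * z; syz += y * z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    b = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]  # replace column i with the right-hand side
        sol.append(det3(Ai) / d)
    return tuple(sol)
```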
Interactive sensor planning
This paper describes an interactive sensor planning system that can be used to select viewpoints subject to camera visibility, field-of-view, and task constraints. Application areas for this method include surveillance planning, safety monitoring, architectural site design planning, and automated site modeling. Given a description of the sensor's characteristics, the objects in the 3-D scene, and the targets to be viewed, our algorithms compute the set of admissible viewpoints that satisfy the constraints. The system first builds topologically correct solid models of the scene from a variety of data sources. Viewing targets are then selected, and visibility volumes and field-of-view cones are computed and intersected to create viewing volumes where cameras can be placed. The user can interactively manipulate the scene and select multiple target features to be viewed by a camera. The user can also select candidate viewpoints within this volume to synthesize views and verify the correctness of the planning system. We present experimental results for the planning system on an actual complex city model.
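The field-of-view constraint reduces to a cone-membership test: a target is viewable if the angle between the camera's optical axis and the camera-to-target direction is within the half-angle of the cone. This symmetric-cone version is a simplified sketch of the constraint, not the system's actual geometry engine.

```python
import math

def in_fov(cam_pos, cam_dir, target, half_angle_deg):
    """Return True if target lies inside the camera's viewing cone.
    cam_dir is the optical axis (need not be normalized). Intersecting this
    test over several targets approximates a viewing volume."""
    v = [t - c for t, c in zip(target, cam_pos)]          # camera -> target
    nv = math.sqrt(sum(x * x for x in v))
    nd = math.sqrt(sum(x * x for x in cam_dir))
    cos_ang = sum(a * b for a, b in zip(v, cam_dir)) / (nv * nd)
    return cos_ang >= math.cos(math.radians(half_angle_deg))
```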
Vision for mobile robot localization in urban environments
This paper addresses the problem of mobile robot localization in urban environments. Typically, GPS is the preferred sensor for outdoor operation. However, using GPS-only localization methods leads to significant performance degradation in urban areas, where tall nearby structures obstruct a clear view of the satellites. In our work, we use vision-based techniques to supplement GPS and odometry and provide accurate localization. The vision system identifies prominent linear features in the scene and matches them with a reduced model of nearby buildings, yielding an improved pose estimate of the robot.
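One simple form of line-to-model matching is a heading correction: associate each observed line orientation with the nearest model (building-edge) orientation and average the residuals. This 1-D sketch assumes the GPS/odometry prior is already roughly correct; the actual system estimates full pose, and all names here are illustrative.

```python
def heading_correction(observed_angles, model_angles, tol=0.2):
    """Estimate a heading correction (radians) by matching each observed
    line orientation to the nearest model orientation and averaging the
    residuals; matches farther than tol are rejected as outliers."""
    residuals = []
    for a in observed_angles:
        best = min(model_angles, key=lambda m: abs(m - a))
        if abs(best - a) <= tol:
            residuals.append(best - a)
    if not residuals:
        return 0.0  # no usable matches: keep the prior heading
    return sum(residuals) / len(residuals)
```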